Dengue fever is a virulent disease present in over 100 tropical and subtropical countries in Africa, the Americas, and Asia. This arboviral disease affects around 400 million people globally, placing severe strain on healthcare systems. The unavailability of a specific drug and a ready-to-use vaccine makes the situation worse. Hence, policymakers must rely on early warning systems to guide intervention-related decisions. Forecasts routinely provide critical information for dangerous epidemic events. However, the available forecasting models (e.g., weather-driven mechanistic, statistical time series, and machine learning models) lack a clear understanding of the different components that improve prediction accuracy and often produce unstable and unreliable forecasts. This study proposes an ensemble wavelet neural network with exogenous factor(s) (XEWNet) model that can produce reliable estimates of dengue outbreaks for three geographical regions, namely San Juan, Iquitos, and Ahmedabad. The proposed XEWNet model is flexible and can easily incorporate exogenous climate variable(s), confirmed by statistical causality tests, into its scalable framework. It is an integrated approach that embeds wavelet transformation in an ensemble neural network framework, which helps generate more reliable long-term forecasts. XEWNet allows complex nonlinear relationships between dengue incidence cases and rainfall while remaining mathematically interpretable, fast in execution, and easily comprehensible. The proposal's competitiveness is measured using computational experiments based on various statistical metrics and several statistical comparison tests. In comparison with statistical, machine learning, and deep learning methods, the proposed XEWNet performs better in 75% of the cases for short-term and long-term forecasting of dengue incidence.
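The causality screening mentioned above (admitting a climate variable such as rainfall only if it statistically "causes" dengue incidence) can be sketched with a Granger-style F-test. This is a minimal illustration with synthetic data, not the paper's exact test; in practice one would use a full implementation (e.g., statsmodels' `grangercausalitytests`) with proper p-values.

```python
import numpy as np

def granger_f_stat(y, x, p=2):
    """F-statistic testing whether p lags of x improve an AR(p) model of y."""
    n = len(y)
    target = y[p:]
    lags_y = [y[p - k:n - k] for k in range(1, p + 1)]
    lags_x = [x[p - k:n - k] for k in range(1, p + 1)]
    ones = np.ones(n - p)
    Xr = np.column_stack([ones] + lags_y)            # restricted: y lags only
    Xu = np.column_stack([ones] + lags_y + lags_x)   # unrestricted: + x lags
    rss = lambda X: np.sum((target - X @ np.linalg.lstsq(X, target, rcond=None)[0]) ** 2)
    rss_r, rss_u = rss(Xr), rss(Xu)
    df = (n - p) - Xu.shape[1]                       # residual degrees of freedom
    return ((rss_r - rss_u) / p) / (rss_u / df)

# Synthetic example: cases driven by rainfall two weeks earlier.
rng = np.random.default_rng(0)
rain = rng.normal(size=300)
cases = np.zeros(300)
for t in range(2, 300):
    cases[t] = 0.5 * cases[t - 1] + 0.8 * rain[t - 2] + 0.1 * rng.normal()
print(granger_f_stat(cases, rain, p=2))  # large F: rainfall "Granger-causes" cases
```

A large F relative to the F(p, df) distribution supports including the exogenous series in the model.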
Deep learning has attracted attention due to its unparalleled success in many fields, such as computer vision, natural language processing, recommendation systems, and, more recently, in simulating multiphysics problems and predicting nonlinear dynamical systems. However, modeling and forecasting the dynamics of chaotic systems remains an open research problem, since training deep learning models requires big data, which is not always available in many cases. Such deep learners can be trained with additional information obtained from simulation results and by enforcing the physical laws of the chaotic systems. This paper considers extreme events and their dynamics and proposes an elegant model based on deep neural networks, called knowledge-based deep learning (KDL). Our proposed KDL can learn the complex patterns governing chaotic systems by jointly training on real and simulated data directly from the dynamics and their differential equations. This knowledge is transferred to model and forecast real-world chaotic events exhibiting extreme behavior. We validate the efficiency of our model by evaluating it on three real-world benchmark datasets: El Nino sea surface temperature, San Juan dengue viral infection, and Bj{\o}rn{\o}ya daily precipitation, all governed by extreme-event dynamics. Using prior knowledge of extreme events and physics-based loss functions to guide the neural network learning, we ensure physically consistent, generalizable, and accurate forecasts even in small-data regimes.
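The physics-based loss idea above, penalizing the model for violating a known governing equation in addition to fitting the data, can be sketched as follows. This is illustrative only: it assumes the governing law is the logistic ODE dx/dt = r*x*(1 - x), whereas the paper's actual dynamics and loss weighting differ.

```python
import numpy as np

def kdl_style_loss(pred, obs, dt, r=1.0, lam=0.5):
    """Data-fit MSE plus a physics-residual penalty, in the spirit of KDL.

    pred: model trajectory on a uniform time grid with spacing dt
    obs:  observed trajectory on the same grid
    Assumed physics (hypothetical for this sketch): dx/dt = r*x*(1 - x).
    """
    data_loss = np.mean((pred - obs) ** 2)
    dxdt = np.gradient(pred, dt)                    # finite-difference derivative
    physics_residual = dxdt - r * pred * (1 - pred)  # violation of the assumed ODE
    return data_loss + lam * np.mean(physics_residual ** 2)
```

A trajectory that satisfies the assumed ODE incurs almost no physics penalty, so the extra term steers learning toward physically consistent predictions even when data are scarce.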
Deep learning with transformers has recently achieved great success in many important areas, such as natural language processing, computer vision, anomaly detection, and recommendation systems, among others. Among the several merits of transformers, the ability to capture long-range temporal dependencies and interactions is desirable for time series forecasting, leading to its progress in various time series applications. In this paper, we build a transformer model for non-stationary time series. The problem is challenging yet crucially important. We present a novel framework for univariate time series representation learning based on a wavelet-based transformer encoder architecture and call it W-Transformer. The proposed W-Transformer applies a maximal overlap discrete wavelet transform (MODWT) to the time series data and builds local transformers on the decomposed datasets to vividly capture the non-stationarity and long-range nonlinear dependencies in the time series. Evaluating our framework on several publicly available benchmark time series datasets from various domains and with diverse characteristics, we demonstrate that it performs, on average, significantly better than the baseline forecasters for short-term and long-term forecasting, even for datasets that consist of only a few hundred training samples.
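The MODWT decomposition step can be sketched for the Haar wavelet with the à trous scheme, under one common normalization where the decomposition is exactly additive (the smooth plus the details reconstruct the series). This is the preprocessing idea only, not the paper's full pipeline, and real implementations handle boundaries and other wavelet filters more carefully.

```python
import numpy as np

def haar_modwt(x, levels=3):
    """Haar MODWT via the à trous scheme with circular boundary handling.

    Returns (details, smooth) satisfying x == smooth + sum(details) exactly --
    the additive decomposition whose components a W-Transformer-style model
    hands to per-component forecasters.
    """
    v = np.asarray(x, dtype=float)
    details = []
    for j in range(levels):
        shifted = np.roll(v, 2 ** j)          # filter upsampled by 2^j (à trous)
        details.append((v - shifted) / 2.0)   # wavelet (detail) coefficients
        v = (v + shifted) / 2.0               # scaling (smooth) coefficients
    return details, v
```

Unlike the ordinary discrete wavelet transform, this transform is shift-invariant and keeps every level at the original series length, which is why it suits forecasting pipelines.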
Infectious diseases remain among the leading causes of human illness and death worldwide, with many of them causing epidemic waves of infection. The unavailability of specific drugs and ready-to-use preventive vaccines for most epidemics makes the situation worse. These circumstances force public health officials, healthcare providers, and policymakers to rely on early warning systems generated by reliable forecasts of epidemics. Accurate epidemic forecasts can help stakeholders tailor countermeasures, such as vaccination campaigns, staff scheduling, and resource allocation, to the situation at hand, which could translate into reductions in the impact of the disease. Unfortunately, most past epidemics (e.g., dengue, malaria, hepatitis, influenza, and most recently COVID-19) exhibit nonlinear and non-stationary characteristics due to their spreading fluctuations based on seasonal-dependent variability and the nature of these epidemics. We analyze a wide variety of epidemic time series datasets using a maximal overlap discrete wavelet transform (MODWT)-based autoregressive neural network and call it EWNet. The MODWT technique effectively characterizes the non-stationary behavior and seasonal dependencies in the epidemic time series and improves the forecasting scheme of the autoregressive neural network in the proposed ensemble wavelet network framework. From a nonlinear time series perspective, we explore the asymptotic stationarity of the proposed EWNet model to show the asymptotic behavior of the associated Markov chain. We also theoretically investigate the effect of learning stability and the choice of the number of hidden neurons in the proposed EWNet model. From a practical perspective, we compare our proposed EWNet framework with several statistical, machine learning, and deep learning models previously used for epidemic forecasting.
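The autoregressive neural network at the core of such a model maps a window of lagged values through a small hidden layer to a one-step forecast. The sketch below uses a closed-form shortcut (random tanh hidden weights, least-squares output layer, in the style of an extreme learning machine) rather than the paper's training procedure, just to make the lag-to-forecast structure concrete.

```python
import numpy as np

def fit_arnn(series, p=4, hidden=16, seed=0):
    """Fit a one-hidden-layer autoregressive network in closed form.

    Lagged values feed a tanh hidden layer; only the output layer is solved
    by least squares (an illustrative shortcut, not EWNet's optimizer).
    Returns a one-step predictor taking the last p values, newest first.
    """
    rng = np.random.default_rng(seed)
    X = np.column_stack([series[p - k:len(series) - k] for k in range(1, p + 1)])
    y = series[p:]
    W = rng.normal(size=(p, hidden))              # fixed random hidden weights
    H = np.tanh(X @ W)
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # linear readout
    def predict(lags):
        return float(np.tanh(np.asarray(lags) @ W) @ beta)
    return predict
```

In EWNet, one such network is fitted per MODWT component and the component forecasts are recombined.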
Forecasting time series data represents an emerging field of data science and knowledge discovery research, with vast applications ranging from stock price and energy demand forecasting to the early prediction of epidemics. Numerous statistical and machine learning methods have been proposed over the last five decades to meet the demand for high-quality and reliable forecasts. However, in real-life forecasting problems, situations exist in which a model based on only one of these paradigms is preferable; hence, hybrid solutions are needed to bridge the gap between classical forecasting methods and modern neural network models. In this context, we introduce a probabilistic autoregressive neural network (PARNN) model that can handle a wide variety of complex time series data (e.g., nonlinear, non-seasonal, long-range dependent, and non-stationary). The proposed PARNN model is constructed as a fusion of an autoregressive integrated moving average (ARIMA) model and an autoregressive neural network, retaining the interpretability, scalability, and "white-box-like" forecasting behavior of its components. Sufficient conditions for asymptotic stationarity and geometric ergodicity are obtained by considering the asymptotic behavior of the associated Markov chain. Unlike advanced deep learning tools, uncertainty quantification for the PARNN model is based on prediction intervals. In computational experiments, PARNN outperforms standard statistical, machine learning, and deep learning models (e.g., Transformers, NBEATS, DeepAR, etc.) on a diverse collection of real-world datasets from macroeconomics, tourism, energy, epidemiology, and other domains for short-term, medium-term, and long-term forecasting. Multiple comparisons with the best method are conducted to showcase the superiority of the proposal over the state-of-the-art forecasters.
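The interval-based uncertainty quantification mentioned above can be sketched with a plain least-squares AR model standing in for the ARIMA/neural hybrid: fit the model, estimate the residual scale, and report a Gaussian prediction interval around the point forecast. This is a toy stand-in for illustration, not PARNN's actual construction.

```python
import numpy as np

def ar_forecast_with_interval(series, p=3, z=1.96):
    """Linear AR(p) one-step forecast with a 95% Gaussian prediction interval.

    Returns (point_forecast, (lower, upper)). The interval width comes from
    the in-sample residual standard deviation -- the simplest version of the
    prediction-interval idea.
    """
    X = np.column_stack([np.ones(len(series) - p)] +
                        [series[p - k:len(series) - k] for k in range(1, p + 1)])
    y = series[p:]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma = resid.std(ddof=X.shape[1])               # unbiased residual scale
    point = float(np.r_[1.0, series[-1:-p - 1:-1]] @ beta)
    return point, (point - z * sigma, point + z * sigma)
```

Reporting an interval rather than a single number lets downstream users weigh the forecast's reliability, which is the practical payoff of the probabilistic formulation.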
A framework for creating and updating digital twins of dynamical systems from a library of physics-based functions is proposed. Sparse Bayesian machine learning is used to update the digital twin and derive an interpretable expression for it. Two approaches for updating the digital twin are proposed. The first approach makes use of both the input and output information from the dynamical system, whereas the second utilizes output-only observations. Both methods use a library of candidate functions representing certain physics to infer new perturbation terms in the existing digital twin model. In both cases, the resulting expressions of the updated digital twins are identical, and in addition, the epistemic uncertainties are quantified. In the first approach, the regression problem is derived from a state-space model, whereas in the latter case, the output-only information is treated as a stochastic process. The concepts of It\^o calculus and the Kramers-Moyal expansion are utilized to derive the regression equation. The performance of the proposed approaches is demonstrated using highly nonlinear dynamical systems such as the crack-degradation problem. The numerical results presented in this paper identify the correct perturbation terms, along with their associated parameters, almost exactly. The probabilistic nature of the proposed approach also helps quantify the uncertainties associated with the updated models. The proposed approaches provide an exact and explainable description of the perturbations in digital twin models, which can be directly used for better cyber-physical integration, long-term future predictions, degradation monitoring, and model-agnostic control.
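The Kramers-Moyal route from output-only data to a regression problem can be illustrated in one dimension: the first Kramers-Moyal coefficient D1(x) = E[dx | x]/dt recovers the drift directly from a measured sample path. The sketch below estimates the drift of a simulated Ornstein-Uhlenbeck process by a linear fit of increments against state; it omits the Bayesian library regression and uncertainty quantification of the actual framework.

```python
import numpy as np

# Simulate dx = -theta*x dt + sigma dW by Euler-Maruyama (output-only data).
rng = np.random.default_rng(1)
theta, sigma, dt, n = 1.5, 0.5, 1e-3, 200_000
noise = sigma * np.sqrt(dt) * rng.normal(size=n - 1)
x = np.empty(n)
x[0] = 0.0
for t in range(n - 1):
    x[t + 1] = x[t] - theta * x[t] * dt + noise[t]

# First Kramers-Moyal coefficient: regress dx/dt on x; slope estimates -theta.
dx = np.diff(x)
slope = np.polyfit(x[:-1], dx / dt, 1)[0]
print(slope)  # should be near -theta = -1.5
```

In the full method, the conditional increment statistics are regressed onto a whole library of candidate physics terms rather than a single linear one.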
We propose a novel model-agnostic data-driven framework for time-dependent reliability analysis. The proposed approach -- referred to as MAntRA -- combines interpretable machine learning, Bayesian statistics, and stochastic dynamic equation identification to evaluate the reliability of stochastically excited dynamical systems for which the governing physics is \textit{a priori} unknown. A two-stage approach is adopted: in the first stage, an efficient variational Bayesian equation discovery algorithm is developed to determine the governing physics of an underlying stochastic differential equation (SDE) from measured output data. The developed algorithm is efficient and accounts for epistemic uncertainty due to limited and noisy data, and for aleatoric uncertainty due to environmental effects and external excitation. In the second stage, the discovered SDE is solved using a stochastic integration scheme and the probability of failure is computed. The efficacy of the proposed approach is illustrated on three numerical examples. The results obtained indicate the possible application of the proposed approach for reliability analysis of in-situ and heritage structures from on-site measurements.
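The equation-discovery stage can be sketched as sparse regression of the observed dynamics onto a library of candidate functions. The version below uses least squares with hard thresholding on a polynomial library; MAntRA's actual algorithm is variational Bayesian and also returns posterior (epistemic) uncertainty on each library weight.

```python
import numpy as np

def discover_drift(x, dxdt, threshold=0.2):
    """Sparse regression of dx/dt onto the polynomial library [1, x, x^2, x^3].

    Coefficients smaller than `threshold` are pruned, leaving an interpretable
    governing equation -- the core idea of library-based equation discovery.
    """
    library = np.column_stack([np.ones_like(x), x, x ** 2, x ** 3])
    coefs, *_ = np.linalg.lstsq(library, dxdt, rcond=None)
    coefs[np.abs(coefs) < threshold] = 0.0   # prune negligible terms
    return coefs

# Synthetic data from dx/dt = 1.5*x - 0.5*x^3 plus small noise.
rng = np.random.default_rng(2)
x = rng.uniform(-2, 2, 500)
dxdt = 1.5 * x - 0.5 * x ** 3 + 0.01 * rng.normal(size=500)
print(discover_drift(x, dxdt))  # ~ [0, 1.5, 0, -0.5]
```

Once the sparse equation is identified, it can be integrated forward as an SDE to estimate failure probabilities, as in the second stage of the framework.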
Transformer layers, which use an alternating pattern of multi-head attention and multi-layer perceptron (MLP) layers, provide an effective tool for a variety of machine learning problems. As the transformer layers use residual connections to avoid the problem of vanishing gradients, they can be viewed as the numerical integration of a differential equation. In this extended abstract, we build upon this connection and propose a modification of the internal architecture of a transformer layer. The proposed model places the multi-head attention sublayer and the MLP sublayer parallel to each other. Our experiments show that this simple modification improves the performance of transformer networks in multiple tasks. Moreover, for the image classification task, we show that using neural ODE solvers with a sophisticated integration scheme further improves performance.
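The proposed modification can be sketched directly: instead of the sequential update x -> x + Attn(LN(x)) followed by a second residual MLP step, both sublayers read the same normalized input and their outputs are summed onto the residual stream. The sketch below is single-head, bias-free NumPy for brevity and is an illustration of the parallel wiring, not the paper's full model.

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    return (x - x.mean(-1, keepdims=True)) / (x.std(-1, keepdims=True) + eps)

def softmax(z):
    z = z - z.max(-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(-1, keepdims=True)

def parallel_block(x, Wq, Wk, Wv, W1, W2):
    """Parallel transformer block: y = x + Attn(LN(x)) + MLP(LN(x)).

    Both sublayers consume the same normalized input, so the update is one
    explicit-Euler-style step y = x + f(x) + g(x) on the residual stream.
    """
    h = layer_norm(x)
    q, k, v = h @ Wq, h @ Wk, h @ Wv
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1])) @ v   # single-head attention
    mlp = np.maximum(h @ W1, 0.0) @ W2                   # ReLU MLP
    return x + attn + mlp

rng = np.random.default_rng(0)
d, seq = 8, 5
x = rng.normal(size=(seq, d))
Ws = [rng.normal(size=s, scale=0.1) for s in [(d, d)] * 3 + [(d, 4 * d), (4 * d, d)]]
y = parallel_block(x, *Ws)
```

Viewing y = x + f(x) + g(x) as one integration step is what motivates swapping in more sophisticated ODE solvers, as the abstract notes.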
Consider a scenario in one-shot query-guided object localization where neither an image of the object nor the object category name is available as a query. In such a scenario, a hand-drawn sketch of the object could be a choice for a query. However, hand-drawn crude sketches alone, when used as queries, might be ambiguous for object localization, e.g., a sketch of a laptop could be confused for a sofa. On the other hand, a linguistic definition of the category, e.g., "a small portable computer small enough to use in your lap", along with the sketch query, gives better visual and semantic cues for object localization. In this work, we present a multimodal query-guided object localization approach under the challenging open-set setting. In particular, we use queries from two modalities, namely, hand-drawn sketch and description of the object (also known as gloss), to perform object localization. Multimodal query-guided object localization is a challenging task, especially when a large domain gap exists between the queries and the natural images, as well as due to the challenge of combining the complementary and minimal information present across the queries. For example, hand-drawn crude sketches contain abstract shape information of an object, while the text descriptions often capture partial semantic information about a given object category. To address the aforementioned challenges, we present a novel cross-modal attention scheme that guides the region proposal network to generate object proposals relevant to the input queries and a novel orthogonal projection-based proposal scoring technique that scores each proposal with respect to the queries, thereby yielding the final localization results. ...
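The orthogonal-projection scoring idea can be sketched geometrically: score each region-proposal feature by the norm of its projection onto the subspace spanned by the query embeddings (sketch and gloss). This is a minimal geometric sketch of the idea; the paper's exact formulation, feature extractors, and normalizations may differ, and the vectors below are synthetic stand-ins.

```python
import numpy as np

def score_proposals(proposals, queries):
    """Score proposal features by projection norm onto the query subspace.

    proposals: (n_proposals, dim) feature matrix
    queries:   (n_queries, dim) query embedding matrix (e.g., sketch + gloss)
    """
    Q, _ = np.linalg.qr(queries.T)       # orthonormal basis of the query subspace
    coords = proposals @ Q               # coordinates of each proposal in that basis
    return np.linalg.norm(coords, axis=1)

rng = np.random.default_rng(3)
queries = rng.normal(size=(2, 16))                    # stand-ins for sketch + gloss
in_subspace = 0.7 * queries[0] + 0.3 * queries[1]     # proposal aligned with queries
off_subspace = rng.normal(size=16)                    # unrelated proposal
scores = score_proposals(np.stack([in_subspace, off_subspace]), queries)
```

A proposal lying in the span of the queries keeps its full norm under projection, while unrelated proposals lose most of theirs, which is what makes the score discriminative.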
We consider the stochastic linear contextual bandit problem with high-dimensional features. We analyze the Thompson sampling (TS) algorithm, using special classes of sparsity-inducing priors (e.g., spike-and-slab) to model the unknown parameter, and provide a nearly optimal upper bound on the expected cumulative regret. To the best of our knowledge, this is the first work that provides theoretical guarantees for Thompson sampling in high-dimensional and sparse contextual bandits. For faster computation, we use a spike-and-slab prior to model the unknown parameter and variational inference instead of MCMC to approximate the posterior distribution. Extensive simulations demonstrate the improved performance of our proposed algorithm over existing ones.
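The Thompson sampling loop itself is simple: maintain a posterior over the unknown parameter, sample from it each round, and pull the arm that is best under the sample. The sketch below uses a dense Gaussian (ridge) posterior as a simplification; the paper's algorithm instead uses sparsity-inducing spike-and-slab priors with variational inference, which this toy does not implement.

```python
import numpy as np

def thompson_step(V, b, contexts, sigma2=1.0, rng=None):
    """One round of TS for a linear contextual bandit with a Gaussian prior.

    V, b accumulate X^T X + I and X^T y, so the posterior over the parameter
    is N(V^{-1} b, sigma2 * V^{-1}). Returns the index of the chosen arm.
    """
    rng = rng or np.random.default_rng()
    mean = np.linalg.solve(V, b)
    cov = sigma2 * np.linalg.inv(V)
    theta = rng.multivariate_normal(mean, cov)   # posterior sample
    return int(np.argmax(contexts @ theta))      # act greedily on the sample

rng = np.random.default_rng(4)
d, n_arms = 5, 10
theta_true = np.zeros(d)
theta_true[0] = 1.0                              # sparse ground truth
V, b = np.eye(d), np.zeros(d)
regret = 0.0
for _ in range(200):
    contexts = rng.normal(size=(n_arms, d))      # one feature vector per arm
    arm = thompson_step(V, b, contexts, rng=rng)
    reward = contexts[arm] @ theta_true + 0.1 * rng.normal()
    regret += (contexts @ theta_true).max() - contexts[arm] @ theta_true
    V += np.outer(contexts[arm], contexts[arm])  # Bayesian update of posterior
    b += reward * contexts[arm]
```

Replacing the Gaussian posterior with a variational spike-and-slab approximation concentrates exploration on the few relevant coordinates, which is what yields the sharp regret bounds in the high-dimensional sparse regime.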